In this project, I’m going to use machine learning to optimize product backorders.

Introduction

A backorder is an order that the company has not yet fulfilled. It indicates consumer interest in the product even though the product is in short supply. This is both good and bad for the company: good because it shows customers are still interested in the product and demand it; bad because if the order is not fulfilled in time, consumers may lose interest and look for an alternative product, which costs the company sales and customers and can damage its image.

Now, one thing a company could do is build so many units that there is never a shortage. But most companies can’t do this because of the high inventory cost, and if demand decreases, they will suffer quite a loss.

So it is better to look at past data and optimize backorders so that inventory cost stays low and products are delivered before the consumer loses interest. This is good for both sides: consumers get the product they want with only a short wait, and the company retains its customers and makes a profit.

There are many challenges in building a predictive model for optimizing backorders. Many factors don’t depend on the company, product, or business, but on external conditions such as holidays, seasons, special occasions, etc. So let’s see what we are going to do.

The data we are using here is obtained from Kaggle: https://www.kaggle.com/tiredgeek/predict-bo-trial/.

Loading required libraries

library(data.table)
library(tidyquant)
library(unbalanced)
library(randomForest)
library(caret)
library(h2o)

Reading the data

train <- read.csv("train.csv", na.strings = "")
test <- read.csv("test.csv", na.strings = "")

Let’s have a look at the data

str(train)
'data.frame':   1687861 obs. of  23 variables:
 $ sku              : Factor w/ 1687861 levels "(1687860 rows)",..: 2 3 4 5 6 7 8 9 10 11 ...
 $ national_inv     : int  0 2 2 7 8 13 1095 6 140 4 ...
 $ lead_time        : int  NA 9 NA 8 NA 8 NA 2 NA 8 ...
 $ in_transit_qty   : int  0 0 0 0 0 0 0 0 0 0 ...
 $ forecast_3_month : int  0 0 0 0 0 0 0 0 15 0 ...
 $ forecast_6_month : int  0 0 0 0 0 0 0 0 114 0 ...
 $ forecast_9_month : int  0 0 0 0 0 0 0 0 152 0 ...
 $ sales_1_month    : int  0 0 0 0 0 0 0 0 0 0 ...
 $ sales_3_month    : int  0 0 0 0 0 0 0 0 0 0 ...
 $ sales_6_month    : int  0 0 0 0 0 0 0 0 0 0 ...
 $ sales_9_month    : int  0 0 0 0 4 0 0 0 0 0 ...
 $ min_bank         : int  0 0 0 1 2 0 4 0 0 0 ...
 $ potential_issue  : Factor w/ 2 levels "No","Yes": 1 1 1 1 1 1 1 1 1 1 ...
 $ pieces_past_due  : int  0 0 0 0 0 0 0 0 0 0 ...
 $ perf_6_month_avg : num  -99 0.99 -99 0.1 -99 0.82 -99 0 -99 0.82 ...
 $ perf_12_month_avg: num  -99 0.99 -99 0.13 -99 0.87 -99 0 -99 0.87 ...
 $ local_bo_qty     : int  0 0 0 0 0 0 0 0 0 0 ...
 $ deck_risk        : Factor w/ 2 levels "No","Yes": 1 1 2 1 2 1 2 2 1 1 ...
 $ oe_constraint    : Factor w/ 2 levels "No","Yes": 1 1 1 1 1 1 1 1 1 1 ...
 $ ppap_risk        : Factor w/ 2 levels "No","Yes": 1 1 1 1 1 1 1 2 1 1 ...
 $ stop_auto_buy    : Factor w/ 2 levels "No","Yes": 2 2 2 2 2 2 2 2 2 2 ...
 $ rev_stop         : Factor w/ 2 levels "No","Yes": 1 1 1 1 1 1 1 1 1 1 ...
 $ went_on_backorder: Factor w/ 2 levels "No","Yes": 1 1 1 1 1 1 1 1 1 1 ...
str(test)
'data.frame':   242076 obs. of  23 variables:
 $ sku              : Factor w/ 242076 levels "(242075 rows)",..: 167 213 440 599 690 1042 1155 1195 1288 1407 ...
 $ national_inv     : int  62 9 17 9 2 15 0 28 2 2 ...
 $ lead_time        : int  NA NA 8 2 8 2 NA NA NA NA ...
 $ in_transit_qty   : int  0 0 0 0 0 0 0 0 0 0 ...
 $ forecast_3_month : int  0 0 0 0 0 0 0 0 0 0 ...
 $ forecast_6_month : int  0 0 0 0 0 0 0 0 0 0 ...
 $ forecast_9_month : int  0 0 0 0 0 0 0 0 0 0 ...
 $ sales_1_month    : int  0 0 0 0 0 0 0 0 0 0 ...
 $ sales_3_month    : int  0 0 0 0 0 0 0 0 0 0 ...
 $ sales_6_month    : int  0 0 0 0 0 1 0 0 0 0 ...
 $ sales_9_month    : int  0 0 0 2 0 2 0 0 0 0 ...
 $ min_bank         : int  1 1 0 0 0 0 0 0 0 0 ...
 $ potential_issue  : Factor w/ 2 levels "No","Yes": 1 1 1 1 1 1 1 1 1 1 ...
 $ pieces_past_due  : int  0 0 0 0 0 0 0 0 0 0 ...
 $ perf_6_month_avg : num  -99 -99 0.92 0.78 0.54 0.37 -99 -99 -99 -99 ...
 $ perf_12_month_avg: num  -99 -99 0.95 0.75 0.71 0.68 -99 -99 -99 -99 ...
 $ local_bo_qty     : int  0 0 0 0 0 0 0 0 0 0 ...
 $ deck_risk        : Factor w/ 2 levels "No","Yes": 2 1 1 1 1 1 1 2 2 1 ...
 $ oe_constraint    : Factor w/ 2 levels "No","Yes": 1 1 1 1 1 1 1 1 1 1 ...
 $ ppap_risk        : Factor w/ 2 levels "No","Yes": 1 2 1 2 1 1 1 1 2 1 ...
 $ stop_auto_buy    : Factor w/ 2 levels "No","Yes": 2 1 2 2 2 2 2 2 2 2 ...
 $ rev_stop         : Factor w/ 2 levels "No","Yes": 1 1 1 1 1 1 1 1 1 1 ...
 $ went_on_backorder: Factor w/ 2 levels "No","Yes": 1 1 1 1 1 1 1 1 1 1 ...

Let’s have a look at the target variable

table(train$went_on_backorder)

     No     Yes 
1676567   11293 

As we can see, the data is highly imbalanced. Since we are focused on predicting backorders, we need more occurrences of the backorder class in our training data, so we need to balance the data.
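To quantify the imbalance, we can check the class proportions directly (a quick sketch in base R on the same train data):

```r
# Fraction of SKUs per class; only about 0.7% went on backorder
prop.table(table(train$went_on_backorder))
```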

Data Pre-Processing

summary(train)
             sku           national_inv        lead_time      in_transit_qty     forecast_3_month   
 (1687860 rows):      1   Min.   :  -27256   Min.   : 0.00    Min.   :     0.0   Min.   :      0.0  
 1026827       :      1   1st Qu.:       4   1st Qu.: 4.00    1st Qu.:     0.0   1st Qu.:      0.0  
 1043384       :      1   Median :      15   Median : 8.00    Median :     0.0   Median :      0.0  
 1043696       :      1   Mean   :     496   Mean   : 7.87    Mean   :    44.1   Mean   :    178.1  
 1043852       :      1   3rd Qu.:      80   3rd Qu.: 9.00    3rd Qu.:     0.0   3rd Qu.:      4.0  
 1044048       :      1   Max.   :12334404   Max.   :52.00    Max.   :489408.0   Max.   :1427612.0  
 (Other)       :1687855   NA's   :1          NA's   :100894   NA's   :1          NA's   :1          
 forecast_6_month  forecast_9_month  sales_1_month      sales_3_month     sales_6_month      
 Min.   :      0   Min.   :      0   Min.   :     0.0   Min.   :      0   Min.   :      0.0  
 1st Qu.:      0   1st Qu.:      0   1st Qu.:     0.0   1st Qu.:      0   1st Qu.:      0.0  
 Median :      0   Median :      0   Median :     0.0   Median :      1   Median :      2.0  
 Mean   :    345   Mean   :    506   Mean   :    55.9   Mean   :    175   Mean   :    341.7  
 3rd Qu.:     12   3rd Qu.:     20   3rd Qu.:     4.0   3rd Qu.:     15   3rd Qu.:     31.0  
 Max.   :2461360   Max.   :3777304   Max.   :741774.0   Max.   :1105478   Max.   :2146625.0  
 NA's   :1         NA's   :1         NA's   :1          NA's   :1         NA's   :1          
 sales_9_month        min_bank         potential_issue pieces_past_due     perf_6_month_avg 
 Min.   :      0   Min.   :     0.00   No  :1686953    Min.   :     0.00   Min.   :-99.000  
 1st Qu.:      0   1st Qu.:     0.00   Yes :    907    1st Qu.:     0.00   1st Qu.:  0.630  
 Median :      4   Median :     0.00   NA's:      1    Median :     0.00   Median :  0.820  
 Mean   :    525   Mean   :    52.77                   Mean   :     2.04   Mean   : -6.872  
 3rd Qu.:     47   3rd Qu.:     3.00                   3rd Qu.:     0.00   3rd Qu.:  0.970  
 Max.   :3205172   Max.   :313319.00                   Max.   :146496.00   Max.   :  1.000  
 NA's   :1         NA's   :1                           NA's   :1           NA's   :1        
 perf_12_month_avg  local_bo_qty       deck_risk      oe_constraint  ppap_risk      stop_auto_buy 
 Min.   :-99.000   Min.   :    0.000   No  :1300377   No  :1687615   No  :1484026   No  :  61086  
 1st Qu.:  0.660   1st Qu.:    0.000   Yes : 387483   Yes :    245   Yes : 203834   Yes :1626774  
 Median :  0.810   Median :    0.000   NA's:      1   NA's:      1   NA's:      1   NA's:      1  
 Mean   : -6.438   Mean   :    0.626                                                              
 3rd Qu.:  0.950   3rd Qu.:    0.000                                                              
 Max.   :  1.000   Max.   :12530.000                                                              
 NA's   :1         NA's   :1                                                                      
 rev_stop       went_on_backorder
 No  :1687129   No  :1676567     
 Yes :    731   Yes :  11293     
 NA's:      1   NA's:      1     
                                 
                                 
                                 
                                 
head(train, 10)
tail(train)
# The last row of each file is a "(n rows)" footer and the first column (sku)
# is only an identifier, so drop both
train <- train[-nrow(train), -1]
test <- test[-nrow(test), -1]
Dealing with NAs
# Encode missing lead_time as -1 and recode the target to a 0/1 factor
train$lead_time <- ifelse(is.na(train$lead_time), -1, train$lead_time)
train$went_on_backorder <- ifelse(train$went_on_backorder == "Yes", 1, 0)
test$lead_time <- ifelse(is.na(test$lead_time), -1, test$lead_time)
test$went_on_backorder <- ifelse(test$went_on_backorder == "Yes", 1, 0)
train$went_on_backorder <- as.factor(train$went_on_backorder)
test$went_on_backorder <- as.factor(test$went_on_backorder)
table(is.na(train))

   FALSE 
37132920 
table(is.na(test))

  FALSE 
5325650 

Now that the NAs are gone, let’s balance the data. There are many ways to deal with an imbalanced dataset; here, I’m using SMOTE.

train_bal <- ubSMOTE(train[,-22], train[,22], perc.over = 200, perc.under = 200, k = 5)
# Recombine the balanced predictors and target, then free the originals
train_final <- cbind(train_bal$X, train_bal$Y)
names(train_final)[22] <- "went_on_backorder"
table(train_final$went_on_backorder)
rm(train, train_bal)

Now we can see the data is quite balanced. As an added benefit, the data size has been reduced hugely, which will make our training faster.
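The new class counts follow from the SMOTE parameters. As a back-of-the-envelope sketch, assuming ubSMOTE follows the usual DMwR-style semantics (perc.over/100 synthetic cases generated per minority case, and perc.under/100 majority cases kept per synthetic case):

```r
n_min <- 11293                           # "Yes" cases in the original train data
synthetic <- n_min * 200 / 100           # perc.over = 200 -> 2 synthetic cases each
minority_final <- n_min + synthetic      # 33879 minority cases (original + synthetic)
majority_final <- synthetic * 200 / 100  # perc.under = 200 -> 45172 majority cases
```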

Modeling

Now, let’s move to the modeling.

Random Forest model
fit.rf <- randomForest(went_on_backorder~., data = train_final)
pred.rf <- predict(fit.rf,test[,-22])
caret::confusionMatrix(test$went_on_backorder,pred.rf)
Confusion Matrix and Statistics

          Reference
Prediction      0      1
         0 229958   9429
         1   1057   1631
                                          
               Accuracy : 0.9567          
                 95% CI : (0.9559, 0.9575)
    No Information Rate : 0.9543          
    P-Value [Acc > NIR] : 9.129e-09       
                                          
                  Kappa : 0.2234          
 Mcnemar's Test P-Value : < 2.2e-16       
                                          
            Sensitivity : 0.9954          
            Specificity : 0.1475          
         Pos Pred Value : 0.9606          
         Neg Pred Value : 0.6068          
             Prevalence : 0.9543          
         Detection Rate : 0.9499          
   Detection Prevalence : 0.9889          
      Balanced Accuracy : 0.5714          
                                          
       'Positive' Class : 0               
                                          

We can see the accuracy is pretty good, but on data this imbalanced, accuracy alone is misleading, so we need to look at the other metrics as well.
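Recomputing a few rates by hand from the cell counts in the matrix above (with "0", no backorder, as the positive class, as in the caret output) shows where the model struggles:

```r
# Cell counts from the confusion matrix above (positive class = "0")
tp <- 229958; fn <- 1057; fp <- 9429; tn <- 1631
(tp + tn) / (tp + fn + fp + tn)  # accuracy    ~ 0.9567
tp / (tp + fn)                   # sensitivity ~ 0.9954
tn / (tn + fp)                   # specificity ~ 0.1475: most real backorders are missed
```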

We will now train a model using H2O. It provides professional-grade, scalable machine learning, and its h2o.automl() function can automatically train models without us specifying an algorithm.

h2o.init()

H2O is not running yet, starting it now...

Note:  In case of errors look at the following log files:
    C:\Users\Vivek\AppData\Local\Temp\Rtmp4me2Jh/h2o_Vivek_started_from_r.out
    C:\Users\Vivek\AppData\Local\Temp\Rtmp4me2Jh/h2o_Vivek_started_from_r.err

java version "1.8.0_77"
Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)

Starting H2O JVM and connecting: ............. Connection successful!

R is connected to the H2O cluster: 
    H2O cluster uptime:         26 seconds 995 milliseconds 
    H2O cluster version:        3.16.0.2 
    H2O cluster version age:    22 days  
    H2O cluster name:           H2O_started_from_R_Vivek_szp782 
    H2O cluster total nodes:    1 
    H2O cluster total memory:   1.74 GB 
    H2O cluster total cores:    0 
    H2O cluster allowed cores:  0 
    H2O cluster healthy:        TRUE 
    H2O Connection ip:          localhost 
    H2O Connection port:        54321 
    H2O Connection proxy:       NA 
    H2O Internal Security:      FALSE 
    H2O API Extensions:         Algos, AutoML, Core V3, Core V4 
    R Version:                  R version 3.4.1 (2017-06-30) 

Let’s create a validation dataset. Since H2O works with H2OFrames, we also need to convert our data to that format.

index <- createDataPartition(train_final$went_on_backorder, p = 0.8, list = FALSE)
train <- train_final[index,]
valid <- train_final[-index,]

train_h2o <- as.h2o(train)
valid_h2o <- as.h2o(valid)
test_h2o <- as.h2o(test)

Now that we have transformed the data, we can use the h2o.automl() function from the h2o package to train models.

y <- "went_on_backorder"
x <- setdiff(names(train_h2o), y)
models_h2o <- h2o.automl(x=x, y=y, training_frame = train_h2o, validation_frame = valid_h2o, leaderboard_frame = test_h2o, max_runtime_secs = 60)


The models have been trained. Let’s extract the leader model.

fit.h2o <- models_h2o@leader

Let’s predict using the model.

pred.h2o <- h2o.predict(fit.h2o, newdata = test_h2o)
as.data.frame(pred.h2o)

We have obtained the predictions. Now, H2O provides a function, h2o.performance(), which helps assess model performance. Let’s try it.

performance.h2o <- h2o.performance(fit.h2o, newdata = test_h2o)

h2o.metric(performance.h2o)
Metrics for Thresholds: Binomial metrics as a function of classification thresholds
  threshold       f1       f2 f0point5 accuracy precision   recall specificity absolute_mcc
1  0.987870 0.018861 0.012014 0.043860 0.988826  0.376812 0.009673    0.999820     0.058929
2  0.983998 0.030151 0.019355 0.068182 0.988838  0.428571 0.015625    0.999766     0.080174
3  0.980809 0.042837 0.027951 0.091647 0.988739  0.381250 0.022693    0.999586     0.090842
4  0.977976 0.047983 0.031536 0.100291 0.988689  0.367021 0.025670    0.999503     0.094691
5  0.974629 0.060955 0.040846 0.120064 0.988545  0.339623 0.033482    0.999269     0.103784
  min_per_class_accuracy mean_per_class_accuracy    tns  fns fps tps      tnr      fnr      fpr
1               0.009673                0.504746 239344 2662  43  26 0.999820 0.990327 0.000180
2               0.015625                0.507696 239331 2646  56  42 0.999766 0.984375 0.000234
3               0.022693                0.511140 239288 2627  99  61 0.999586 0.977307 0.000414
4               0.025670                0.512586 239268 2619 119  69 0.999503 0.974330 0.000497
5               0.033482                0.516376 239212 2598 175  90 0.999269 0.966518 0.000731
       tpr idx
1 0.009673   0
2 0.015625   1
3 0.022693   2
4 0.025670   3
5 0.033482   4

---
    threshold       f1       f2 f0point5 accuracy precision   recall specificity absolute_mcc
395  0.012465 0.025681 0.061815 0.016207 0.159963  0.013008 0.997024    0.150564     0.043441
396  0.012042 0.024471 0.059008 0.015436 0.116340  0.012387 0.998140    0.106439     0.035706
397  0.011616 0.023890 0.057655 0.015067 0.094305  0.012090 0.998140    0.084157     0.031217
398  0.011262 0.022855 0.055241 0.014408 0.051558  0.011560 0.998884    0.040921     0.021165
399  0.010847 0.022236 0.053796 0.014015 0.023848  0.011243 0.999628    0.012891     0.011692
400  0.010107 0.021964 0.053159 0.013842 0.011104  0.011104 1.000000    0.000000     0.000000
    min_per_class_accuracy mean_per_class_accuracy   tns fns    fps  tps      tnr      fnr      fpr
395               0.150564                0.573794 36043   8 203344 2680 0.150564 0.002976 0.849436
396               0.106439                0.552289 25480   5 213907 2683 0.106439 0.001860 0.893561
397               0.084157                0.541148 20146   5 219241 2683 0.084157 0.001860 0.915843
398               0.040921                0.519903  9796   3 229591 2685 0.040921 0.001116 0.959079
399               0.012891                0.506260  3086   1 236301 2687 0.012891 0.000372 0.987109
400               0.000000                0.500000     0   0 239387 2688 0.000000 0.000000 1.000000
         tpr idx
395 0.997024 394
396 0.998140 395
397 0.998140 396
398 0.998884 397
399 0.999628 398
400 1.000000 399

Let’s look at the AUC, a metric widely used in business and in competitions like Kaggle’s.

h2o.auc(performance.h2o)
[1] 0.9130247

It is about 0.91, which is very good considering the minimal effort put in.

So, we saw how H2O can help us get a well-performing model. It is also scalable, so you can put the model into production. We also saw how to handle an imbalanced dataset when required.
